Way-prediction (WP) caches reduce power consumption and latency for highly associative data caches and are therefore attractive for embedded systems. In this paper, we propose an enhanced way-prediction cache, the dual-access way-prediction (DAWP) cache, to cope with the weaknesses of the WP cache. The prediction logic designed for the DAWP cache contains a scaled index table, a global history register, and a fully associative cache to achieve higher prediction accuracy, which in turn yields lower energy consumption and latency. Performance is measured with a simulation model implemented with SimpleScalar and CACTI, using nine SPEC2000 benchmark programs. Our experimental results show that the proposed DAWP cache is highly efficient in power and latency for highly associative cache structures. The efficiency increases with associativity; with a 64 KB cache, the DAWP cache achieves power gains of 16.45% to 75.85% and latency gains of 4.91% to 26.96% for 2-way to 32-way structures, respectively. We also observe that the random replacement policy yields better power and latency efficiency than the LRU (least recently used) policy with the DAWP cache.
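To make the first-probe/fallback behavior that way prediction exploits concrete, the following C sketch shows a conventional WP lookup: a single predicted way is probed first, and only on a misprediction are the remaining ways searched. This is a minimal illustration of the baseline concept, not the paper's DAWP prediction logic (which adds a scaled index table, a global history register, and a fully associative cache); the cache geometry, structure names, and the simple "last hit way" predictor are all illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS 256   /* illustrative geometry, not the paper's 64 KB configurations */
#define NUM_WAYS 8
#define LINE_SIZE 64   /* assumed 64-byte cache lines */

typedef struct {
    bool     valid;
    uint32_t tag;
} CacheLine;

typedef struct {
    CacheLine ways[NUM_SETS][NUM_WAYS];
    uint8_t   predicted_way[NUM_SETS];  /* simple per-set "last hit way" predictor */
} WPCache;

/* Returns true on a hit; *first_probe_hit reports whether the single
 * predicted-way probe sufficed (the low-power, low-latency case). */
bool wp_lookup(WPCache *c, uint32_t addr, bool *first_probe_hit)
{
    uint32_t set = (addr / LINE_SIZE) % NUM_SETS;
    uint32_t tag = addr / (LINE_SIZE * NUM_SETS);

    /* First access: probe only the predicted way. */
    uint8_t pw = c->predicted_way[set];
    if (c->ways[set][pw].valid && c->ways[set][pw].tag == tag) {
        *first_probe_hit = true;
        return true;
    }

    /* Misprediction: fall back to probing the remaining ways. */
    *first_probe_hit = false;
    for (int w = 0; w < NUM_WAYS; w++) {
        if (w == pw)
            continue;
        if (c->ways[set][w].valid && c->ways[set][w].tag == tag) {
            c->predicted_way[set] = (uint8_t)w;  /* train the predictor */
            return true;
        }
    }
    return false;  /* cache miss */
}
```

The energy and latency benefit comes from the first probe touching only one way's tag and data arrays; the cost of a misprediction is the additional fallback access, which is the penalty the DAWP prediction logic is designed to reduce through higher prediction accuracy.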